Variational Boosting: Iteratively Refining Posterior Approximations
Authors
Abstract
We propose a black-box variational inference method to approximate intractable distributions with an increasingly rich approximating class. Our method, variational boosting, iteratively refines an existing variational approximation by solving a sequence of optimization problems, allowing a trade-off between computation time and accuracy. We expand the variational approximating class by incorporating additional covariance structure and by introducing new components to form a mixture. We apply variational boosting to synthetic and real statistical models, and show that the resulting posterior inferences compare favorably to existing variational algorithms.
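The abstract's refinement loop can be illustrated with a minimal sketch: starting from a single Gaussian, each boosting step fits one new mixture component against an unnormalized target density using a Monte Carlo estimate of the ELBO. Everything here (the 1-D bimodal target, the fixed mixing weight rho, the derivative-free optimizer) is an illustrative assumption, not the paper's actual algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical unnormalized target: a 1-D bimodal log-density.
def log_p(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

# Log-density of a Gaussian mixture; comps is a list of (weight, mean, log_std).
def mixture_log_q(x, comps):
    parts = [np.log(w) - 0.5 * ((x - m) / np.exp(s)) ** 2 - s
             - 0.5 * np.log(2.0 * np.pi) for w, m, s in comps]
    return np.logaddexp.reduce(np.stack(parts), axis=0)

# Monte Carlo estimate of the negative ELBO for a trial mixture that blends
# one new component (params) into the current approximation with weight rho.
# Common random numbers (pick, eps) keep the estimate smooth in params.
def neg_elbo(params, comps, rho, pick, eps):
    m, s = params
    trial = [(w * (1.0 - rho), mu, ls) for w, mu, ls in comps] + [(rho, m, s)]
    means = np.array([c[1] for c in trial])
    stds = np.exp(np.array([c[2] for c in trial]))
    x = means[pick] + stds[pick] * eps          # samples from the trial mixture
    return np.mean(mixture_log_q(x, trial) - log_p(x))

rng = np.random.default_rng(0)
comps = [(1.0, 0.0, 0.0)]                       # start from a single Gaussian
for k in range(3):                              # boosting iterations
    rho = 0.5                                   # fixed mixing weight (a simplification)
    weights = [w * (1.0 - rho) for w, _, _ in comps] + [rho]
    pick = rng.choice(len(weights), size=2000, p=weights)
    eps = rng.standard_normal(2000)
    res = minimize(neg_elbo, rng.standard_normal(2), method="Nelder-Mead",
                   args=(comps, rho, pick, eps))
    m, s = res.x
    comps = [(w * (1.0 - rho), mu, ls) for w, mu, ls in comps] + [(rho, m, s)]
    print(f"step {k + 1}: new component mean={m:.2f}, std={np.exp(s):.2f}")
```

Running the sketch, later components should drift toward whichever mode the current mixture under-covers, illustrating the trade-off between computation time and accuracy that the abstract describes.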
Similar references
Boosting Variational Inference
Modern Bayesian inference typically requires some form of posterior approximation, and mean-field variational inference (MFVI) is an increasingly popular choice due to its speed. But MFVI can be inaccurate in several respects, including an inability to capture multimodality in the posterior and underestimation of the posterior covariance. These issues arise since MFVI considers approximations to...
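For context, mean-field VI restricts the approximation to a fully factorized family (this is the standard formulation, not text from the paper):

```latex
q(\theta) \;=\; \prod_{i=1}^{d} q_i(\theta_i),
\qquad
q^* \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}_{\text{MF}}}
\mathrm{KL}\!\left(q(\theta) \,\|\, p(\theta \mid Y)\right).
```

Because every member of this family has independent coordinates, it cannot represent posterior correlations or multiple modes, which is the inaccuracy the excerpt describes.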
Variational Inference with Normalizing Flows
The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variation...
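As a reference point, a normalizing flow enriches the posterior family by pushing a simple base density through a sequence of invertible maps, with the log-density tracked by the change-of-variables formula (standard in the normalizing-flows literature):

```latex
z_K = f_K \circ \cdots \circ f_1(z_0), \qquad z_0 \sim q_0,
\qquad
\log q_K(z_K) \;=\; \log q_0(z_0)
\;-\; \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|.
```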
An Introduction to Bayesian Inference Via Variational Approximations: Supplemental Notes
1.1 The Tractability-Fit Tradeoff in Variational Approximations
The goal of a variational approximation is to approximate a posterior, p(β|Y), by making an approximating distribution, q(β), as close as possible to the true posterior (Bishop, 2006). We search over the space of approximating distributions in order to find the particular distribution with the minimum KL-divergence with the actual ...
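The search the excerpt describes is usually carried out through the standard evidence decomposition: because the marginal likelihood is fixed, maximizing the ELBO over q is equivalent to minimizing the KL divergence to the posterior:

```latex
\log p(Y) \;=\;
\underbrace{\mathbb{E}_{q(\beta)}\!\left[\log \frac{p(Y,\beta)}{q(\beta)}\right]}_{\mathrm{ELBO}(q)}
\;+\;
\mathrm{KL}\!\left(q(\beta)\,\|\,p(\beta \mid Y)\right).
```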
Iterative Refinement of Approximate Posterior for Training Directed Belief Networks
Deep directed graphical models, while a potentially powerful class of generative representations, are challenging to train due to difficult inference. Recent variational inference methods that make use of an inference or recognition network have advanced well beyond traditional variational inference and Markov chain Monte Carlo methods. While these techniques offer higher flexibility as wel...
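For orientation, an inference (recognition) network amortizes posterior approximation by mapping each observation to the parameters of its own approximate posterior; this is the standard amortized-VI setup, not text quoted from the paper:

```latex
q_\phi(h \mid x) \approx p_\theta(h \mid x),
\qquad
\max_{\theta,\phi}\; \sum_{x \in \mathcal{D}}
\mathbb{E}_{q_\phi(h \mid x)}\!\left[\log \frac{p_\theta(x, h)}{q_\phi(h \mid x)}\right].
```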
Improved Variational Approximation for Bayesian PCA
As with most non-trivial models, an exact Bayesian treatment of the probabilistic PCA model (under a meaningful prior) is analytically intractable. Various approximations have therefore been proposed in the literature; these include approximations based on type-II maximum likelihood as well as variational approximations. In this document, we describe an improved variational approximation for Ba...
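For reference, the probabilistic PCA model in question is the standard latent linear-Gaussian model of Tipping and Bishop, which a Bayesian treatment equips with a prior over the loading matrix W:

```latex
z \sim \mathcal{N}(0, I_q),
\qquad
x \mid z \sim \mathcal{N}(W z + \mu,\; \sigma^2 I_d).
```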